
    An architecture for an ATM network continuous media server exploiting temporal locality of access

    With the continuing drop in the price of memory, Video-on-Demand (VoD) solutions that have so far focused on maximising the throughput of disk units with minimal use of physical memory may now employ significant amounts of cache memory. The subject of this thesis is the study of a technique to best utilise a memory buffer within such a VoD solution. In particular, knowledge of the streams active on the server is used to allocate cache memory. Stream optimised caching exploits reuse of data among streams that are temporally close to each other within the same clip; the data fetched on behalf of the leading stream may be cached and reused by the following streams. Therefore, only the leading stream requires access to the physical disk, and the potential level of service provision allowed by the server may be increased. The use of stream optimised caching may consequently be limited to environments where reuse of data is significant. As such, the technique examined within this thesis focuses on a classroom environment where user progress is generally linear and all users progress at approximately the same rate; for such an environment, reuse of data is guaranteed. The analysis of stream optimised caching begins with a detailed theoretical discussion of the technique and suggests possible implementations. Later chapters describe both the design and construction of a prototype server that employs the caching technique, and experiments that use the prototype to assess the effectiveness of the technique for the chosen environment using `emulated' users. The conclusions of these experiments indicate that stream optimised caching may be applicable to VoD systems on a larger scale than small teaching environments. Future development of stream optimised caching is also considered.
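    The core idea of the abstract can be sketched roughly as follows. The class and names below are hypothetical illustrations, not drawn from the thesis: a following stream that is temporally close to the leader is served from cached blocks, so only the leading stream touches the physical disk.

```python
# Minimal sketch of stream-optimised caching (illustrative names only):
# the leading stream fetches a block from disk and caches it; temporally
# close followers on the same clip reuse the cached copy.
class StreamOptimisedCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = {}        # (clip, block) -> data
        self.disk_reads = 0

    def read_block(self, clip, block, fetch_from_disk):
        key = (clip, block)
        if key in self.cache:  # a follower reuses the leader's fetch
            return self.cache[key]
        self.disk_reads += 1   # only the leading stream hits the disk
        data = fetch_from_disk(clip, block)
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest entry
        self.cache[key] = data
        return data

# Two streams on the same clip, one block apart: the follower's requests
# are all cache hits, so disk accesses equal the number of distinct
# blocks, not the total number of requests.
cache = StreamOptimisedCache(capacity_blocks=8)
disk = lambda clip, block: f"{clip}:{block}"
for b in range(5):
    cache.read_block("lecture1", b, disk)          # leading stream
    if b > 0:
        cache.read_block("lecture1", b - 1, disk)  # following stream
```

    In this toy run, nine block requests are satisfied with only five disk reads, which is the effect the abstract describes: the server's admissible load grows without a matching growth in disk bandwidth.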

    Experiences in implementing a real-time video filestore

    Access to video and audio clips can enhance students' understanding of a complex subject. In a classroom environment, all the students will typically use the same media objects at approximately the same time. The majority of this usage will be simple playback with comparatively little recording. Hence, by caching the media data and taking advantage of the predictable nature of stream usage by the current clients, we may satisfy multiple requests for media data with only a single disk access and so alleviate the bottleneck of the disk subsystem. Consequently, it is possible to construct a multi-user continuous media server for use within this environment at a relatively small cost. From previous efforts to build such a server, it is clear that the bandwidth of a standard disk device is the major bottleneck preventing the supply of real-time video to multiple clients. Solutions based on a RAID disk architecture can achieve high aggregate bandwidth, but at a cost which is unattractive for a teaching environment. In this situation there is typically a small number of clips, each of which is a few minutes long. The solution presented here is very efficient on the small scale, but would be inappropriate for `video on demand' or video archives. The server's cache is managed through a knowledge of the activities of all current streams, viz. which file each accesses, at what point on the time-line, at what speed and in which direction. By maintaining a global map of all current activity it is possible to predict the file data next in demand. In particular, we keep data resident which has been used by one stream, but will be needed by others very soon. Data used by a stream that has no predicted demand is discarded immediately. This technique makes optimal use of cache space and greatly reduces the load placed on the disks.
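    The predictive retention policy described above might be sketched as follows. The function and its parameters are illustrative assumptions; the abstract specifies only that retention is decided from each stream's position, speed and direction of playback.

```python
# Hypothetical sketch of the predictive cache policy: a block just
# consumed is retained only if some active stream will reach it soon.
def should_retain(block, streams, horizon=10):
    """streams: list of (position, speed); speed > 0 plays forward,
    speed < 0 plays backward, speed == 0 is paused."""
    for pos, speed in streams:
        if speed == 0:
            continue
        steps = (block - pos) / speed   # time until this stream reaches the block
        if 0 < steps <= horizon:        # block lies ahead, within the horizon
            return True
    return False

# Stream A has just read block 50; stream B trails at block 45.
streams = [(50, 1), (45, 1)]
should_retain(50, streams)   # True: B reaches block 50 in 5 steps
should_retain(30, streams)   # False: no stream has predicted demand
```

    Dividing by the signed speed makes the same test cover reverse playback: a stream at block 55 moving at speed -1 also has predicted demand for block 50.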
The server is designed to store any format of media data, either compressed or uncompressed; the exact encoding of the data is transparent to the server's media container format. To achieve format independence each media object consists of two files: i) the media data; and ii) the timing/synchronisation metadata. The metadata file also contains a header to allow the clients to determine the format and type of the data, plus parameters such as the number of blocks and frames, the total playback duration, and so on. Each block of data in the media data file is encoded in a form ready for transmission over an ATM network without modification; the computationally expensive block error check data does not need to be calculated for each block of data transmitted, so minimising transmission latency and improving overall throughput. The server supports NFS, allowing the media directory hierarchy to be searched and manipulated by any host able to act as an NFS client. The streams are initiated and controlled via an additional interface using the Sun RPC protocol, providing all basic functions such as setup, teardown, pause/continue, playback speed alterations and position changes. In addition, conversion daemons have been constructed which translate the Sun RPC protocol and allow integration with such systems as ANSAWare, CORBA and Microsoft RPC. This approach has the advantage of not over-complicating the video server while enabling integration with any client system. Thus, the video file server is available to any machine with an ATM network adapter and the appropriate driver software.
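    The two-file media object layout could look something like the sketch below. The magic value, field names and struct layout are illustrative assumptions; the abstract specifies only that the metadata file carries a header identifying the format and type of the data plus counts such as blocks, frames and total playback duration.

```python
# Sketch of a fixed-size metadata header for the two-file media object
# (all field names and the layout are hypothetical, not from the paper).
import struct

# little-endian: 4-byte magic, format id, block count, frame count,
# total playback duration in seconds (double)
HEADER_FMT = "<4sIIId"

def pack_header(fmt_id, blocks, frames, duration_s):
    return struct.pack(HEADER_FMT, b"CMFS", fmt_id, blocks, frames, duration_s)

def unpack_header(raw):
    magic, fmt_id, blocks, frames, duration = struct.unpack(HEADER_FMT, raw)
    if magic != b"CMFS":
        raise ValueError("not a media metadata file")
    return {"format": fmt_id, "blocks": blocks,
            "frames": frames, "duration_s": duration}

hdr = unpack_header(pack_header(fmt_id=1, blocks=1200, frames=3000,
                                duration_s=120.0))
# hdr["frames"] == 3000 and hdr["duration_s"] == 120.0
```

    Keeping the timing metadata separate from the media data is what lets the data file hold blocks pre-encoded for ATM transmission: the server can send them verbatim without parsing or re-framing the payload.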